This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, that allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase its
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method.

Comment: Accepted at the 2019 Third IEEE International Conference on Robotic
Computing (IRC); supplementary video: https://youtu.be/lKsGp8xtyI
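The key step of incorporating a user-defined border into the robot's navigational map can be illustrated with a minimal sketch. This is not the authors' implementation; the occupancy-grid convention (0 = free, 100 = occupied, as in ROS maps) and the helper names `point_in_polygon` and `mark_virtual_border` are illustrative assumptions:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge crosses the horizontal ray through (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mark_virtual_border(grid, polygon, resolution=0.05):
    """Mark every grid cell whose centre lies inside the polygonal
    virtual border (vertices in metres) as occupied (value 100),
    so a path planner treats the region as off-limits."""
    for row in range(len(grid)):
        for col in range(len(grid[0])):
            cx = (col + 0.5) * resolution
            cy = (row + 0.5) * resolution
            if point_in_polygon(cx, cy, polygon):
                grid[row][col] = 100
    return grid

# Usage: a 1 m x 1 m map at 5 cm resolution with a square keep-out zone.
grid = [[0] * 20 for _ in range(20)]
border = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]
mark_virtual_border(grid, border)
```

Once the bordered cells are marked occupied, any standard grid-based planner avoids them without further changes, which matches the abstract's claim that the borders are incorporated directly into the navigational map.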
Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality
We address the problem of interactively controlling the workspace of a mobile
robot to ensure a human-aware navigation. This is especially of relevance for
non-expert users living in human-robot shared spaces, e.g. home environments,
since they want to keep the control of their mobile robots, such as vacuum
cleaning or companion robots. Therefore, we introduce virtual borders that are
respected by a robot while performing its tasks. For this purpose, we employ a
an RGB-D Google Tango tablet as a human-robot interface in combination with an
augmented reality application to flexibly define virtual borders. We evaluated
our system with 15 non-expert users concerning accuracy, teaching time and
correctness and compared the results with other baseline methods based on
visual markers and a laser pointer. The experimental results show that our
method features an equally high accuracy while reducing the teaching time
significantly compared to the baseline methods. This holds for different border
lengths, shapes and variations in the teaching process. Finally, we
demonstrated the correctness of the approach, i.e. the mobile robot changes its
navigational behavior according to the user-defined virtual borders.

Comment: Accepted at the 2018 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
A Framework for Interactive Teaching of Virtual Borders to Mobile Robots
The increasing number of robots in home environments leads to an emerging
coexistence between humans and robots. Robots undertake common tasks and
support the residents in their everyday life. People appreciate the presence of
robots in their environment as long as they keep control over them. One
important aspect is the control of a robot's workspace. Therefore, we introduce
virtual borders to precisely and flexibly define the workspace of mobile
robots. First, we propose a novel framework that allows a person to
interactively restrict a mobile robot's workspace. To demonstrate the validity
of this framework, a concrete implementation based on visual markers is presented.
Afterwards, the mobile robot is capable of performing its tasks while
respecting the new virtual borders. The approach is accurate, flexible and less
time-consuming than explicit robot programming. Hence, even non-experts are
able to teach virtual borders to their robots, which is especially interesting
for vacuum-cleaning or service robots in home environments.

Comment: 7 pages, 6 figures
Sensor system for development of perception systems for ATO
Developing AI systems for automatic train operation (ATO) requires developers to have a deep understanding of the human tasks they are trying to replace. This paper addresses this need and translates the regulatory requirements from the context of German railways for the AI developer community. As a result, tasks such as monitoring the train's path for collision prediction, signal detection, door operation, etc. are identified. Based on this analysis, a functionally justified sensor setup with detailed configuration requirements is presented. This setup was also evaluated by a survey within the railway industry. The evaluated sensors include RGB/IR cameras, LiDARs, radars and ultrasonic sensors. Calculations and estimates for the evaluated sensors are presented graphically and included in this paper. However, the ultimate sensor setup is still a subject of research. The results of this paper also address the lack of training and test datasets for railway AI systems. It is proposed to acquire research datasets that will allow the training of domain adaptation algorithms to transform other datasets, thus increasing the number of available datasets. The sensor setup is also recommended for such research datasets.
Onboard Sensor Systems for Automatic Train Operation
This paper introduces the specific requirements of the domain of train operation and its regulatory framework to the AI community. It assesses sensor sets for driverless and unattended train operation. It lists functionally justified ranges of technical specifications for sensors of different types, which will generate input for AI perception algorithms (i.e. for signal and obstacle detection). Since an optimal sensor set is the subject of research, this paper provides the specification of a generic data acquisition platform as a crucial step. Some particular results are recommendations for the minimal resolution and shutter type for image sensors, as well as beam steering methods and resolutions for LiDARs.